"Efficient" Subgradient Methods for General Convex Optimization


Abstract



A subgradient method is presented for solving general convex optimization problems, the main requirement being that a strictly feasible point is known. A feasible sequence of iterates is generated, which converges to within a user-specified error of optimality. Feasibility is maintained with a linesearch at each iteration, avoiding the need for orthogonal projections onto the feasible region (an ...
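The abstract describes only the mechanism, so the following is a hedged sketch of the general idea, not the paper's actual algorithm: maintain feasibility after each subgradient step with a bisection linesearch toward a known strictly feasible point. The names `subgrad` and `feasible` and the bisection depth are illustrative assumptions.

```python
import numpy as np

def feasible_subgradient_step(x, e, subgrad, feasible, step):
    """One illustrative subgradient step that restores feasibility with a
    linesearch toward a strictly feasible point e, instead of computing an
    orthogonal projection.  Assumes `feasible` is a cheap membership test
    for a convex feasible region containing e in its interior."""
    y = x - step * subgrad(x)        # plain subgradient step; may leave the region
    if feasible(y):
        return y
    # Bisection on the segment [e, y]: t = 0 (the point e) is feasible and
    # t = 1 (the point y) is not, so by convexity there is a last feasible
    # point along the segment; approximate it from below.
    lo, hi = 0.0, 1.0
    for _ in range(30):
        t = 0.5 * (lo + hi)
        if feasible(e + t * (y - e)):
            lo = t
        else:
            hi = t
    return e + lo * (y - e)          # feasible by construction
```

The appeal of such a linesearch is that it needs only membership tests, which are typically far cheaper than an exact orthogonal projection onto a general convex region.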


Similar Articles

Subgradient methods for convex minimization

Many optimization problems arising in various applications require minimization of an objective cost function that is convex but not differentiable. Such a minimization arises, for example, in model construction, system identification, neural networks, pattern classification, and various assignment, scheduling, and allocation problems. To solve convex but not differentiable problems, we have to emp...
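As a concrete reference point for the classical scheme this abstract discusses, here is a minimal projected subgradient method with the standard diminishing step size 1/sqrt(k+1); the l1 objective and unit-ball constraint are assumptions chosen purely for illustration:

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, iters=2000):
    """Classical projected subgradient method with diminishing steps
    alpha_k = 1/sqrt(k+1).  Subgradient methods are not descent
    methods, so the best iterate seen is returned."""
    x = np.asarray(x0, dtype=float)
    best, best_val = x.copy(), f(x)
    for k in range(iters):
        g = subgrad(x)
        x = project(x - g / np.sqrt(k + 1.0))
        val = f(x)
        if val < best_val:
            best, best_val = x.copy(), val
    return best, best_val

# Illustrative instance: minimize ||x - c||_1 over the Euclidean unit ball.
c = np.array([2.0, -1.0])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)                   # a subgradient of the l1 objective
project = lambda y: y / max(1.0, np.linalg.norm(y))  # projection onto the unit ball
x_best, f_best = projected_subgradient(f, subgrad, project, np.zeros(2))
```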


Randomized Block Subgradient Methods for Convex Nonsmooth and Stochastic Optimization

Block coordinate descent methods and stochastic subgradient methods have been extensively studied in optimization and machine learning. By combining randomized block sampling with stochastic subgradient methods based on dual averaging [22, 36], we present stochastic block dual averaging (SBDA), a novel class of block subgradient methods for convex nonsmooth and stochastic optimization. SBDA re...
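SBDA itself is only named here; the following hedged sketch shows the generic shape of randomized block dual averaging rather than the paper's exact method: uniform block sampling, a running dual average, and a closed-form prox step for the unconstrained Euclidean case. All parameter choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_dual_averaging(subgrad, x0, n_blocks, iters=5000, gamma=1.0):
    """Hedged sketch of randomized block dual averaging (not the paper's
    exact SBDA).  Each iteration samples one coordinate block uniformly,
    adds a rescaled block of the subgradient to the dual average z (the
    factor n_blocks keeps the update unbiased), and recomputes the primal
    point from the closed-form prox step x = x0 - z / beta_k for the
    unconstrained Euclidean distance-generating function."""
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    z = np.zeros_like(x0)
    blocks = np.array_split(np.arange(x0.size), n_blocks)
    for k in range(1, iters + 1):
        i = rng.integers(n_blocks)                 # uniformly sampled block
        g = subgrad(x)                             # a real solver computes block i only
        z[blocks[i]] += n_blocks * g[blocks[i]]    # update the dual average on block i
        x = x0 - z / (gamma * np.sqrt(k))          # standard dual-averaging schedule
    return x

# Illustrative instance: minimize ||x - c||_1 (no constraints).
c = np.linspace(-1.0, 1.0, 8)
x = block_dual_averaging(lambda x: np.sign(x - c), np.zeros(8), n_blocks=4)
```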


Mirror descent and nonlinear projected subgradient methods for convex optimization

The mirror descent algorithm (MDA) was introduced by Nemirovsky and Yudin for solving convex optimization problems. This method exhibits an efficiency estimate that is mildly dependent on the dimension of the decision variables, and is thus suitable for solving very large scale optimization problems. We present a new derivation and analysis of this algorithm. We show that the MDA can be viewed as a nonline...
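A standard concrete instance of MDA, chosen here for illustration and not taken from this paper's analysis, is entropic mirror descent on the probability simplex. Its efficiency estimate grows only like sqrt(log n) in the dimension, the kind of mild dependence the abstract refers to, and its Bregman prox step has a closed multiplicative form:

```python
import numpy as np

def entropic_mirror_descent(subgrad, dim, iters=1000, step=0.1):
    """Mirror descent with the entropy mirror map on the probability
    simplex: the update x <- x * exp(-step * g), renormalized, is the
    exact Bregman/prox step for the KL divergence."""
    x = np.full(dim, 1.0 / dim)                  # start at the simplex center
    for _ in range(iters):
        g = subgrad(x)
        w = x * np.exp(-step * (g - g.max()))    # shift for numerical stability
        x = w / w.sum()                          # renormalize onto the simplex
    return x

# Illustrative instance: minimize a linear function <c, x> over the simplex.
c = np.array([0.3, 0.9, 0.1, 0.5])
x = entropic_mirror_descent(lambda x: c, dim=4)  # mass concentrates on argmin(c)
```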


Inexact subgradient methods for quasi-convex optimization problems

In this paper, we consider a generic inexact subgradient algorithm to solve a nondifferentiable quasi-convex constrained optimization problem. The inexactness stems from computational errors and noise, which come from practical considerations and applications. Assuming that the computational errors and noise are deterministic and bounded, we study the effect of the inexactness on the subgradient ...
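To make the setting concrete, here is a hedged sketch of an inexact subgradient iteration under a bounded oracle error. The quasi-convex test function, the random perturbation (standing in for the paper's deterministic bounded noise), and the normalized diminishing steps are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def inexact_subgradient(f, subgrad, x0, iters=3000, step0=1.0, noise=0.05):
    """Illustrative inexact subgradient method: the oracle returns a
    subgradient direction corrupted by a bounded perturbation, directions
    are normalized (standard for quasi-convex problems), and diminishing
    steps step0/sqrt(k+1) are used.  With bounded errors one can only
    expect convergence to within a noise-dependent tolerance."""
    x = np.asarray(x0, dtype=float)
    best, best_val = x.copy(), f(x)
    for k in range(iters):
        g = subgrad(x)
        g = g + noise * rng.uniform(-1, 1, size=g.shape)  # bounded oracle error
        x = x - (step0 / np.sqrt(k + 1.0)) * g / (np.linalg.norm(g) + 1e-12)
        if f(x) < best_val:
            best, best_val = x.copy(), f(x)
    return best, best_val

# Illustrative quasi-convex instance: f(x) = sqrt(||x - c||).
c = np.array([1.0, 2.0])
f = lambda x: np.sqrt(np.linalg.norm(x - c))
subgrad = lambda x: (x - c) / (np.linalg.norm(x - c) + 1e-12)  # ascent direction
x_best, f_best = inexact_subgradient(f, subgrad, np.zeros(2))
```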



Journal

Journal title: SIAM Journal on Optimization

Year: 2016

ISSN: 1052-6234, 1095-7189

DOI: 10.1137/15m1027371